Distributed Gradient Tracking Methods with Finite Data Rates

Authors

Abstract

This paper studies the distributed optimization problem over an undirected connected graph subject to digital communications with a finite data rate, where each agent holds a strongly convex and smooth local cost function and the agents need to cooperatively minimize the average of all agents' functions. Each agent builds an encoder/decoder pair: the messages transmitted to its neighbors are produced by a finite-level uniform quantizer, and the neighbors' states are recovered by a recursive decoder from the received quantized signals. Combining this adaptive quantization scheme with the gradient tracking method, the authors propose a distributed algorithm and prove that convergence can be achieved at a linear rate even when agents communicate at a 1-bit rate. Numerical examples are also given to illustrate the theoretical results.
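To make the scheme concrete, the following is a minimal numerical sketch, not the paper's exact algorithm: gradient tracking for scalar quadratic costs f_i(x) = (x - b_i)^2 / 2 on a ring graph, where every transmitted quantity passes through a finite-level uniform quantizer whose scale shrinks geometrically (a simple stand-in for the adaptive scheme), and neighbors recover states with the matching recursive decoder. The step size, scale decay, and number of quantization levels are illustrative choices.

```python
import numpy as np

def uniform_quantize(v, n_levels=2):
    """Finite-level uniform quantizer: round to the nearest integer, clip to [-n_levels, n_levels]."""
    return np.clip(np.round(v), -n_levels, n_levels)

n = 6                                   # agents on a ring graph
rng = np.random.default_rng(0)
b = rng.normal(size=n)                  # f_i(x) = 0.5*(x - b_i)^2; the global optimum is b.mean()

# Doubly stochastic mixing matrix for the ring (weight 1/3 to self and to each neighbour).
W = np.zeros((n, n))
for i in range(n):
    for j in (i, (i - 1) % n, (i + 1) % n):
        W[i, j] = 1.0 / 3.0

alpha = 0.1                             # step size (illustrative)
s, gamma = 2.0, 0.98                    # quantizer scale and its decay factor (illustrative)

x = np.zeros(n)                         # local decision variables
y = x - b                               # gradient trackers, initialised with the local gradients
g_prev = y.copy()
x_hat = np.zeros(n)                     # decoders' estimates of the states (identical at all nodes)
y_hat = np.zeros(n)                     # decoders' estimates of the trackers

for k in range(400):
    # Encoder: quantize the innovation between the true value and the decoder's estimate,
    # then update the (shared) recursive decoder estimate.
    x_hat = x_hat + s * uniform_quantize((x - x_hat) / s)
    y_hat = y_hat + s * uniform_quantize((y - y_hat) / s)
    s *= gamma                          # shrinking scale, standing in for the adaptive scheme

    # Gradient-tracking update driven only by the decoded (quantized) information.
    x = x + (W @ x_hat - x_hat) - alpha * y
    g = x - b                           # exact local gradients of the quadratic costs
    y = y + (W @ y_hat - y_hat) + g - g_prev
    g_prev = g

print("optimum          :", b.mean())
print("max state error  :", np.max(np.abs(x - b.mean())))
```

With these illustrative parameters the quantization error injected per step is proportional to the shrinking scale s, so the states contract toward the optimum despite the few-level quantizer; the paper's result is the corresponding linear-rate guarantee down to a 1-bit rate.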

Similar Articles

Stabilization with Finite Data Rates

In this term project, we study the effects of finite communication rates in control problems. Traditionally, the communication channel between the plant and the controller is not modelled. This is because it is assumed that the outputs of the former (noisy or noiseless) are available completely and with infinite precision to the latter. In this work, we study systems where the link between the ...
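As a toy illustration of the finite-rate setting (not taken from the cited work), the snippet below stabilizes an unstable scalar plant x_{k+1} = a x_k + u_k when the controller only sees the state through an R-bit channel, using a "zooming" uniform quantizer whose range shrinks as the state contracts. The plant gain, bit budget, and zoom margin are illustrative and chosen so that 2^R > a.

```python
import numpy as np

# Toy illustration: stabilising the unstable scalar plant x_{k+1} = a*x_k + u_k
# when the sensor may send only R bits per step.  A zooming uniform quantizer
# keeps the state inside a range [-L, L] that is rescaled over time.
a, R = 2.0, 2                 # unstable pole and bits per transmission (2**R = 4 > a)
N = 2 ** R                    # number of quantizer cells
L = 10.0                      # current quantizer range; the state is assumed to lie in [-L, L]
x = 3.7                       # true initial state (unknown to the controller), |x| < L

for k in range(30):
    # Encoder: send the index (R bits) of the cell of [-L, L] containing x.
    idx = min(int((x + L) / (2 * L / N)), N - 1)
    # Decoder/controller: reconstruct the cell midpoint and apply deadbeat control.
    x_hat = -L + (idx + 0.5) * (2 * L / N)
    u = -a * x_hat
    x = a * x + u             # plant update; the residual error is at most a*L/N
    L = a * L / N * 1.05      # zoom in: the new range covers the worst-case residual (5% margin)

print("final |x|:", abs(x), "  final range:", L)
```

Because a/N < 1, both the range L and the state shrink geometrically, which is the basic mechanism behind data-rate conditions for stabilization.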


Distributed Delayed Proximal Gradient Methods

We analyze distributed optimization algorithms where parts of data and variables are distributed over several machines and synchronization occurs asynchronously. We prove convergence for the general case of a nonconvex objective plus a convex and possibly nonsmooth penalty. We demonstrate two challenging applications, ℓ1-regularized logistic regression and reconstruction ICA, and present experi...
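For reference, here is a minimal single-machine sketch of the proximal-gradient step that such methods build on, applied to ℓ1-regularized logistic regression; the delayed/asynchronous distributed machinery is omitted, and the synthetic data, step size, and penalty weight are illustrative.

```python
import numpy as np

# Proximal-gradient sketch for  min_w  mean(log(1 + exp(-y * Xw))) + lam * ||w||_1.

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 20))
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]                               # sparse ground truth (illustrative)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=200))

lam, step = 0.05, 0.5
w = np.zeros(20)
for _ in range(300):
    margins = y * (X @ w)
    grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)  # gradient of the logistic loss
    w = soft_threshold(w - step * grad, step * lam)          # proximal gradient step

print("recovered support:", np.flatnonzero(np.abs(w) > 1e-3))
```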


A Distributed Stochastic Gradient Tracking Method

In this paper, we study the problem of distributed multi-agent optimization over a network, where each agent possesses a local cost function that is smooth and strongly convex. The global objective is to find a common solution that minimizes the average of all cost functions. Assuming agents only have access to unbiased estimates of the gradients of their local cost functions, we consider a dis...
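A compact sketch of a stochastic gradient-tracking recursion of the kind described is given below; quadratic local costs, a complete graph, and additive Gaussian noise as the unbiased gradient oracle are illustrative assumptions, not the setting of the cited paper.

```python
import numpy as np

# Stochastic gradient tracking: each agent mixes with its neighbours, moves along a
# tracking variable y_i that estimates the average gradient, and feeds y_i with noisy
# local gradients of f_i(x) = 0.5*(x - b_i)^2.
n, alpha, noise = 5, 0.05, 0.1
rng = np.random.default_rng(2)
b = rng.normal(size=n)
W = np.full((n, n), 1.0 / n)              # mixing matrix of the complete graph

def noisy_grad(x):
    """Unbiased estimates of the local gradients x_i - b_i."""
    return x - b + noise * rng.normal(size=n)

x = np.zeros(n)
g_prev = noisy_grad(x)
y = g_prev.copy()                          # trackers initialised with the first gradient samples

for k in range(2000):
    x = W @ x - alpha * y                  # consensus step plus move along the tracker
    g = noisy_grad(x)
    y = W @ y + g - g_prev                 # tracking update: y follows the noisy average gradient
    g_prev = g

print("optimum:", round(b.mean(), 3), "  states:", np.round(x, 3))
```

With a constant step size the iterates settle in a noise-dependent neighborhood of the common minimizer; the cited paper analyzes this kind of recursion rigorously.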


Optimal Rates for Learning with Nyström Stochastic Gradient Methods

In the setting of nonparametric regression, we propose and study a combination of stochastic gradient methods with Nyström subsampling, allowing multiple passes over the data and mini-batches. Generalization error bounds for the studied algorithm are provided. Particularly, optimal learning rates are derived considering different possible choices of the step-size, the mini-batch size, the numbe...
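A rough sketch of the two ingredients, Nyström subsampling to build a finite-dimensional feature map and mini-batch SGD on the resulting least-squares problem, follows; the kernel, subsample size, step size, batch size, and number of passes are illustrative choices, not those analyzed in the cited work.

```python
import numpy as np

# Nyström + mini-batch SGD for kernel least squares: m landmark points give the
# feature map phi(x) = k_m(x)^T K_mm^{-1/2}; SGD then fits a linear model in that space.
rng = np.random.default_rng(3)
n, m = 500, 50
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=n)

def gauss_kernel(A, B, sigma=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

landmarks = X[rng.choice(n, m, replace=False)]
K_mm = gauss_kernel(landmarks, landmarks)
# Symmetric inverse square root of K_mm (small ridge added for numerical stability).
vals, vecs = np.linalg.eigh(K_mm + 1e-6 * np.eye(m))
K_mm_inv_sqrt = vecs @ np.diag(vals ** -0.5) @ vecs.T
Phi = gauss_kernel(X, landmarks) @ K_mm_inv_sqrt         # n x m Nystrom features

w, step, batch = np.zeros(m), 0.5, 20
for epoch in range(20):                                  # multiple passes over the data
    for idx in np.array_split(rng.permutation(n), n // batch):
        res = Phi[idx] @ w - y[idx]
        w -= step * Phi[idx].T @ res / len(idx)          # mini-batch least-squares gradient step

print("training RMSE:", np.sqrt(np.mean((Phi @ w - y) ** 2)))
```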



Journal

Journal title: Journal of Systems Science & Complexity

Year: 2021

ISSN: 1009-6124, 1559-7067

DOI: https://doi.org/10.1007/s11424-021-1231-9